The Limits of Learning with Missing Data
Brian Bullins, Elad Hazan, Tomer Koren
The primary objective of linear regression is to determine the relationships between multiple variables and how they may affect a certain outcome. A standard example is medical diagnosis, where the data gathered for a given patient provide information about their susceptibility to certain illnesses. A major drawback of this process is the effort required to collect the data, as it entails running numerous tests on each person, some of which may be uncomfortable. In such cases it may be necessary to limit the number of attributes observed for each example.
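The limited-attribute setting can be sketched with a simple coordinate-sampling scheme. This is my own illustration under assumed details (uniform sampling, a budget of 2k observed attributes per example), not necessarily the estimator analyzed in the paper:

```python
import numpy as np

def limited_attribute_sgd(X, y, k, lr=0.01, epochs=50, seed=0):
    """SGD for least-squares regression when only a few of the d
    attributes may be observed per example. Each step draws two
    independent samples of k coordinates (2k observed attributes in
    total) and rescales by d/k, so each partial observation is an
    unbiased estimate of the full attribute vector."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # two independent coordinate samples keep the product,
            # and hence the gradient estimate, unbiased
            idx1 = rng.choice(d, size=k, replace=False)
            idx2 = rng.choice(d, size=k, replace=False)
            x1 = np.zeros(d)
            x1[idx1] = X[i, idx1] * (d / k)
            x2 = np.zeros(d)
            x2[idx2] = X[i, idx2] * (d / k)
            w -= lr * (w @ x1 - y[i]) * x2  # estimate of (w.x - y) x
    return w
```

Because the two coordinate samples are independent, the expected update equals the true per-example gradient (w·x − y)x, at the cost of higher variance than full-information SGD.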
Deep Learning for VWAP Execution in Crypto Markets: Beyond the Volume Curve
Volume-Weighted Average Price (VWAP) is arguably the most prevalent benchmark for trade execution, as it provides an unbiased standard for comparing performance across market participants. However, achieving VWAP is inherently challenging due to its dependence on two dynamic factors: volumes and prices. Traditional approaches typically focus on forecasting the market's volume curve, an approach that may work well under steady conditions but becomes suboptimal in more volatile environments, or in markets such as cryptocurrency where prediction error margins are higher. In this study, I propose a deep learning framework that directly optimizes the VWAP execution objective, bypassing the intermediate step of volume curve prediction. Leveraging automatic differentiation and custom loss functions, my method calibrates order allocation to minimize VWAP slippage, thereby fully addressing the complexities of the execution problem. My results demonstrate that this direct optimization approach consistently achieves lower VWAP slippage than conventional methods, even when utilizing the naive linear model presented in arXiv:2410.21448. They validate the observation that strategies optimized for VWAP performance tend to diverge from accurate volume curve predictions, and thus underscore the advantage of directly modeling the execution objective. This research contributes a more efficient and robust framework for VWAP execution in volatile markets, illustrating the potential of deep learning in complex financial systems where direct objective optimization is crucial. Although my empirical analysis focuses on cryptocurrency markets, the underlying principles of the framework are readily applicable to other asset classes such as equities.
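The kind of differentiable objective involved can be sketched as follows. The slippage definition and names here are illustrative assumptions of mine, not the paper's exact loss:

```python
import numpy as np

def vwap_slippage(alloc, prices, volumes):
    """Squared deviation between the execution VWAP achieved by an
    order-allocation curve and the market VWAP benchmark.

    alloc   -- fraction of the parent order filled in each interval
               (non-negative, sums to 1)
    prices  -- average trade price per interval
    volumes -- market volume per interval
    """
    alloc = np.asarray(alloc, dtype=float)
    exec_vwap = np.dot(alloc, prices)                    # our average fill price
    mkt_vwap = np.dot(volumes, prices) / volumes.sum()   # market benchmark
    return (exec_vwap - mkt_vwap) ** 2
```

With volumes known in advance, allocating proportionally to volume (`alloc = volumes / volumes.sum()`) drives this loss to zero, which is why the traditional approach forecasts the volume curve; the direct approach instead backpropagates through a loss of this form, so the learned allocation need not match the realized volume curve.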
Reviews: Online Prediction with Selfish Experts
In this paper, the standard (binary outcome) "prediction with expert advice" setting is altered by treating the experts themselves as players in the game. Specifically, restricting to the class of weight-update algorithms, the experts seek to maximize the weight assigned to them by the algorithm, as a notion of "rating" for their predictions. It is shown that a learning algorithm is incentive compatible (IC), meaning that experts reveal their true beliefs to the algorithm, if and only if its weight-update function is a strictly proper scoring rule. Because the weighted-majority algorithm (WM) has a weight update that is essentially the negative of the loss function, WM is thus IC if and only if the negative loss is strictly proper. As the negative absolute loss function is not strictly proper, WM is not IC for absolute loss.
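The proper-scoring-rule distinction can be checked numerically. The quadratic (Brier) score below is a standard strictly proper rule, used here for contrast; this is my own illustration, not the paper's construction:

```python
import numpy as np

def brier(q, y):
    """Quadratic (Brier) score: strictly proper."""
    return -(q - y) ** 2

def neg_abs(q, y):
    """Negative absolute loss: not strictly proper."""
    return -abs(q - y)

def expected_score(score, belief, report):
    """Expected score of `report` when the expert believes the
    outcome is 1 with probability `belief`."""
    return belief * score(report, 1) + (1 - belief) * score(report, 0)

belief = 0.7
reports = np.linspace(0, 1, 101)
best_brier = reports[np.argmax([expected_score(brier, belief, q) for q in reports])]
best_abs = reports[np.argmax([expected_score(neg_abs, belief, q) for q in reports])]
```

Under the Brier score the expected score is maximized by reporting the true belief 0.7, while under negative absolute loss it is maximized by reporting 1.0: an expert scored by absolute loss is incentivized to round their belief to the likelier outcome rather than reveal it.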
Reviews: Best Response Regression
Setting: There are two regression learners, an "opponent" and an "agent". The opponent has some fixed but unknown hypothesis. The agent observes samples consisting of data pairs x, y along with the absolute loss of the opponent's strategy. The agent then produces a hypothesis. On a given data point, the agent "wins" if that hypothesis has smaller absolute loss than the opponent's.
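The win criterion can be stated in a few lines; the function name `win_rate` is my own shorthand for the fraction of points the agent wins:

```python
import numpy as np

def win_rate(agent_pred, opp_loss, y):
    """Fraction of data points on which the agent's hypothesis 'wins',
    i.e. incurs strictly smaller absolute loss than the opponent's
    observed absolute loss on that point."""
    return np.mean(np.abs(agent_pred - y) < opp_loss)
```

Note the objective is the *fraction of points won*, not total loss: a best-response agent may accept a large loss on some points in exchange for narrowly beating the opponent on many others.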
The Pareto Regret Frontier
Performance guarantees for online learning algorithms typically take the form of regret bounds, which express that the cumulative loss overhead compared to the best expert in hindsight is small. In the common case of large but structured expert sets, we typically wish to keep the regret especially small compared to simple experts, at the cost of modest additional overhead compared to more complex ones. We study which such regret trade-offs can be achieved, and how.
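As a toy illustration of such a trade-off, consider the standard exponentially weighted average forecaster with non-uniform prior weights (my choice of example, not necessarily the paper's algorithm): an expert with prior weight π incurs regret at most ln(1/π)/η + ηT/8, so favoring simple experts with larger priors buys them smaller regret at the expense of the others.

```python
import numpy as np

def hedge_regrets(losses, prior, eta=0.5):
    """Run the exponentially weighted average forecaster with prior
    weights `prior` on a loss matrix of shape (rounds, experts), losses
    in [0, 1]; return the algorithm's regret against each expert."""
    T, n = losses.shape
    log_w = np.log(prior)
    alg_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())  # subtract max for stability
        p /= p.sum()
        alg_loss += p @ losses[t]        # expected loss this round
        log_w -= eta * losses[t]         # exponential weight update
    return alg_loss - losses.sum(axis=0)
```

Against any loss sequence in [0, 1], each per-expert regret returned here obeys the bound ln(1/prior_i)/eta + eta*T/8, which makes the prior the knob that trades regret against one expert off against regret against another.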